Nonconvex Optimization Is Combinatorial Optimization
Authors
Abstract
Difficult nonconvex optimization problems contain a combinatorial number of local optima, making them extremely challenging for modern solvers. We present a novel nonconvex optimization algorithm that explicitly finds and exploits local structure in the objective function in order to decompose it into subproblems, exponentially reducing the size of the search space. Our algorithm’s use of decomposition, branch & bound, and caching solidifies the connection between nonconvex optimization and combinatorial optimization. We discuss preliminary experimental results on protein folding, Gaussian mixture models, and bundle adjustment.
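The core idea can be illustrated with a toy example. This is not the paper's actual algorithm (which combines decomposition with branch & bound and caching); it is a minimal sketch, with made-up objective functions, of why decomposition shrinks the search space: if the variables of a nonconvex objective split into independent blocks, each block can be searched separately, so a grid search with k candidate values per variable costs on the order of 2·k² evaluations instead of k⁴ over the joint space.

```python
import itertools

# Hypothetical nonconvex subproblems over disjoint variable blocks.
# f(x0, x1, x2, x3) = f1(x0, x1) + f2(x2, x3), so f1 and f2 can be
# minimized independently.

def f1(a, b):
    return (a * a - 1) ** 2 + (b - a) ** 2      # minima at (1, 1), (-1, -1)

def f2(c, d):
    return (c * c - 4) ** 2 + (d + c) ** 2      # minima at (2, -2), (-2, 2)

grid = [i / 10 for i in range(-30, 31)]          # 61 candidate values per variable

def block_min(g):
    # Exhaustive search over one block: 61**2 evaluations.
    return min((g(u, v), (u, v)) for u, v in itertools.product(grid, grid))

v1, (x0, x1) = block_min(f1)
v2, (x2, x3) = block_min(f2)
best = v1 + v2   # global grid optimum of f, found in 2 * 61**2 evaluations
                 # instead of 61**4 over the joint space
```

The exponential saving compounds with every additional independent block, which is why exploiting local structure can make otherwise intractable nonconvex problems feasible.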
Similar papers
An Efficient Neurodynamic Scheme for Solving a Class of Nonconvex Nonlinear Optimization Problems
Under a p-power (or partial p-power) transformation, the Lagrangian function of a nonconvex optimization problem becomes locally convex. In this paper, we present a neural network based on an NCP function for solving the nonconvex optimization problem. An important feature of this neural network is the one-to-one correspondence between its equilibria and KKT points of the nonconvex optimizatio...
Recursive Decomposition for Nonconvex Optimization
Continuous optimization is an important problem in many areas of AI, including vision, robotics, probabilistic inference, and machine learning. Unfortunately, most real-world optimization problems are nonconvex, causing standard convex techniques to find only local optima, even with extensions like random restarts and simulated annealing. We observe that, in many cases, the local modes of the o...
An efficient improvement of the Newton method for solving nonconvex optimization problems
The Newton method is one of the best-known line-search methods for minimizing functions. It is well known that the search direction and step length play important roles in this class of methods for solving optimization problems. In this investigation, a new modification of the Newton method for solving unconstrained optimization problems is presented. The significant ...
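The abstract above does not specify the modification's details, but the baseline it improves on can be sketched. The following is a standard damped Newton iteration with a backtracking line search on a nonconvex 1-D test function; the function and all parameter values are illustrative, not taken from the paper.

```python
def damped_newton(f, grad, hess, x, tol=1e-10, max_iter=100):
    """Minimize a smooth 1-D function: Newton direction -g/h when the
    Hessian is positive, gradient descent otherwise, with step halving
    (backtracking) until the objective decreases."""
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        h = hess(x)
        step = -g / h if h > 0 else -g   # safeguard for nonconvex regions
        t = 1.0
        while f(x + t * step) > f(x) and t > 1e-12:
            t *= 0.5                     # backtracking line search
        x += t * step
    return x

# f(x) = x**4 - 3*x**2 is nonconvex, with minima at x = ±sqrt(1.5)
xmin = damped_newton(lambda x: x ** 4 - 3 * x ** 2,
                     lambda x: 4 * x ** 3 - 6 * x,
                     lambda x: 12 * x ** 2 - 6,
                     x=2.0)
```

Both safeguards (the Hessian sign check and the backtracking step) address exactly the issue the abstract highlights: away from a local minimum, the raw Newton direction and unit step length can fail on nonconvex objectives.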
Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization
Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.
Global behavior of the Douglas-Rachford method for a nonconvex feasibility problem
In recent times the Douglas–Rachford algorithm has been observed empirically to solve a variety of nonconvex feasibility problems including those of a combinatorial nature. For many of these problems current theory is not sufficient to explain this observed success and is mainly concerned with questions of local convergence. In this paper we analyze global behavior of the method for finding a p...
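The iteration under study is easy to state. As a hedged illustration (not the specific problem analyzed in the paper), the sketch below runs Douglas–Rachford on a simple nonconvex feasibility problem in the plane: finding a point in the intersection of the unit circle (a nonconvex set) and a horizontal line; the particular sets and starting point are chosen only for demonstration.

```python
import math

def proj_circle(p):
    # Nearest point on the unit circle (nonconvex set).
    n = math.hypot(p[0], p[1]) or 1.0
    return (p[0] / n, p[1] / n)

def proj_line(p, h=0.5):
    # Nearest point on the line x2 = h.
    return (p[0], h)

def douglas_rachford(x, iters=500):
    # x_{k+1} = x_k + P_B(2 P_A(x_k) - x_k) - P_A(x_k)
    for _ in range(iters):
        a = proj_circle(x)
        r = (2 * a[0] - x[0], 2 * a[1] - x[1])   # reflect through the circle
        b = proj_line(r)
        x = (x[0] + b[0] - a[0], x[1] + b[1] - a[1])
    return proj_circle(x)   # "shadow" point, expected near the intersection

# From a generic start, the iteration is observed empirically to reach an
# intersection point (sqrt(0.75), 0.5) of the circle and the line x2 = 0.5.
px, py = douglas_rachford((2.0, 1.0))
```

Even though each projection is onto a nonconvex set, the iteration succeeds here, which is exactly the kind of empirically observed behavior that the paper's global analysis aims to explain.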